(G+M) Mark Kingwell – Artificial intelligence in 2017 means respect, not fear

…fear remains the dominant emotion when humans talk about technological change. Are self-driving cars better described as self-crashing? Is the Internet of Things, where we eagerly allow information-stealing algorithms into our rec rooms and kitchens, the end of privacy? Is the Singularity imminent?

But fright is closely seconded by wonder. Your smartphone makes Deep Blue look, as Mr. [Garry] Kasparov has said, like an alarm clock. In your pocket lies computing power exponentially greater than that of a Cray supercomputer from the 1970s, a machine that occupied an entire room and required an elaborate cooling system. Look at all the things I can do, not to mention dates I can make, while walking heedlessly down the sidewalk! This is familiar terrain. The debate about artificial intelligence is remarkable for not being a debate at all but rather, as with Trump-era politics or the cultural-appropriation issue, a series of conceptual standoffs. Can we get past the typical stalemates and break some new ground on artificial intelligence?

I think we can, and Mr. Kasparov himself makes the first part of the argument. We can program non-human systems, he notes, to do what we already know how to do. Deep Blue won against him using brute-force surveys of possible future moves, something human players do far less quickly. But when it comes to things we humans don’t understand about ourselves, and so can’t translate into code, the stakes are different. Intuition, creativity, empathy – these are qualities of the human mind that the mind itself cannot map. To use Julian Jaynes’s memorable image, we are like flashlights, illuminating the external world but not the mechanisms by which we perceive it.

Read it all.

Posted in Canada, Philosophy, Science & Technology